A powerful embedded machine vision system for understanding human behavior at every level.
Detect and analyze facial information, including identity, gender, and even emotion.
Control devices remotely and send commands with your hand gestures.
Long-distance body language detection and action recognition.
Human awareness, action recognition
User identity recognition, gesture commanding
Interactive gesture recognition, games
A drone visually tracks a user and understands the user's gestures.
The user plays Fruit Ninja on a smart TV using hand gestures.
Deep-learning-based machine vision system for embedded platforms (Linux, Android, and iOS) and standard cameras
Visual recognition powered by deep learning models
Advanced algorithms optimized for real-time applications
Works with standard cameras; no special sensors required
Processes everything locally on the embedded platform; no private data is transmitted to the cloud
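To illustrate the local-processing workflow, here is a minimal sketch of an on-device loop that reads frames from a standard webcam and detects faces without sending anything off the device. It uses OpenCV's bundled Haar cascade detector purely as a stand-in for the product's deep learning models (which are not shown here); the camera index and window name are placeholder choices.

```python
# Minimal sketch: on-device face detection from a standard camera.
# All processing stays local; no frames or results leave the device.
# Assumes OpenCV is installed (pip install opencv-python); the Haar
# cascade file ships with OpenCV, so no external model is needed.
import cv2


def main():
    # Classic Haar cascade face detector bundled with OpenCV,
    # standing in for a deep learning model in this illustration.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    cap = cv2.VideoCapture(0)  # standard USB/laptop camera, no special sensor
    if not cap.isOpened():
        raise RuntimeError("Could not open camera 0")

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("local detection", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```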
We are a group of robotics and machine learning research scientists. The core team members have experience working closely with the NSERC Canadian Field Robotics Network, Disney Research Pittsburgh, the National Institute of Informatics (Japan), Clearpath Robotics, SAP, and the Simon Fraser University Autonomy Lab and Vision and Media Lab.